
fix(install): limit preparer concurrency to prevent file handle exhaustion#17633

Open
denyszhak wants to merge 7 commits into astral-sh:main from denyszhak:fix/installer-concurrency

Conversation

@denyszhak

Summary

Resolves #17512

Fixes an issue where uv could exhaust the operating system's file descriptor limit when installing projects with a large number of local dependencies, causing a "Too many open files (os error 24)" crash.
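The shape of the fix, bounding how many prepare jobs run at once so only a limited number of file handles are open simultaneously, can be sketched with a std-only toy (the names `Semaphore` and `run` are illustrative stand-ins, not uv's actual async types):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// Toy std-only counting semaphore, standing in for the async semaphore
// used by the fix; illustrative only, not uv's actual API.
struct Semaphore {
    permits: Mutex<usize>,
    cvar: Condvar,
}

impl Semaphore {
    fn new(n: usize) -> Self {
        Semaphore { permits: Mutex::new(n), cvar: Condvar::new() }
    }
    fn acquire(&self) {
        let mut p = self.permits.lock().unwrap();
        while *p == 0 {
            p = self.cvar.wait(p).unwrap();
        }
        *p -= 1;
    }
    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cvar.notify_one();
    }
}

/// Runs `tasks` "prepare" jobs with at most `limit` in flight; returns the
/// peak number of concurrently running jobs that was observed.
fn run(limit: usize, tasks: usize) -> usize {
    let sem = Arc::new(Semaphore::new(limit));
    let gauge = Arc::new(Mutex::new((0usize, 0usize))); // (current, peak)
    let handles: Vec<_> = (0..tasks)
        .map(|_| {
            let (sem, gauge) = (Arc::clone(&sem), Arc::clone(&gauge));
            thread::spawn(move || {
                sem.acquire();
                {
                    let mut g = gauge.lock().unwrap();
                    g.0 += 1;
                    g.1 = g.1.max(g.0);
                }
                // This is the window in which file handles would be held.
                thread::sleep(Duration::from_millis(2));
                gauge.lock().unwrap().0 -= 1;
                sem.release();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let peak = gauge.lock().unwrap().1;
    peak
}

fn main() {
    println!("peak concurrency: {}", run(4, 64));
}
```

With the permit in place, spawning 64 jobs never holds more than 4 "handles" at once, which is the property the fix relies on.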

Test Plan

  1. Built a uv version that has the issue
  2. Ran ulimit -n 256 to lower the allowed number of open file descriptors
  3. mkdir -p /tmp/airflow-test
  4. git clone --depth 1 https://github.com/apache/airflow /tmp/airflow-test
  5. cd /tmp/airflow-test
  6. /Users/denyszhak/personal-repos/uv/target/debug/uv sync

Bug: Hit Too many open files (os error 24) at path "/Users/denyszhak/.cache/uv/sdists-v9/editable/136c133a3434a0fa/.tmpZtqKQt"

  1. Built uv version with the fix (source branch)
  2. /Users/denyszhak/personal-repos/uv/target/debug/uv sync

Fix: uv sync succeeded (Resolved 922 packages in 10.82s)

@denyszhak
Author

Not sure why the test job is failing; at first glance it seems unrelated to this change.

@konstin
Member

konstin commented Jan 21, 2026

Yep, the failure is unrelated: #17637

@konstin konstin added the bug Something isn't working label Jan 21, 2026
@konstin
Member

konstin commented Jan 21, 2026

Interesting, I thought we were already using limits early but it seems we aren't.

CC @charliermarsh for the preparer code.

@konstin konstin requested a review from charliermarsh January 21, 2026 12:04
@denyszhak denyszhak marked this pull request as draft February 23, 2026 00:18
konstin added a commit that referenced this pull request Feb 23, 2026
Avoid problems such as #15307, follow-up to #18054. See also #17633, for which this should be helpful.
konstin added a commit that referenced this pull request Feb 25, 2026
Avoid problems such as #15307, follow-up to #18054. See also #17633, for which this should be helpful.
@denyszhak denyszhak force-pushed the fix/installer-concurrency branch 2 times, most recently from 2468a97 to 949397f on March 2, 2026 01:11

// Acquire the concurrency permit and advisory lock.
let _permit = self.acquire_concurrency_permit().await;
let _lock = lock_shard.lock().await.map_err(Error::CacheLock)?;
Author

I moved the lock later on purpose. If I had just added the semaphore where the old lock was, it would also start limiting revision and download work on some paths. The tradeoff is that it could duplicate a bit of earlier work, but we still re-check under the lock before building, so final cache correctness stays the same.
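The ordering described here, permit first, lock second, with a re-check before building, can be sketched as follows (hypothetical `BuiltCache` type, not uv's actual code):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Hypothetical cache type illustrating the "re-check under the lock"
// pattern: pre-lock work may be duplicated between tasks, but only one
// result for a given key is ever committed.
struct BuiltCache {
    built: Mutex<HashMap<String, String>>,
}

impl BuiltCache {
    fn get_or_build(&self, key: &str, build: impl FnOnce() -> String) -> String {
        // Cheap pre-lock check; this can race, and the worst case is that
        // two tasks redo some of the earlier revision/download work.
        if let Some(hit) = self.built.lock().unwrap().get(key) {
            return hit.clone();
        }
        // ... earlier download/extract work would happen here, unserialized ...

        // Re-check under the lock before building, so correctness holds
        // even if another task finished the same key in the meantime.
        let mut map = self.built.lock().unwrap();
        map.entry(key.to_string()).or_insert_with(build).clone()
    }
}

fn main() {
    let cache = BuiltCache { built: Mutex::new(HashMap::new()) };
    let first = cache.get_or_build("pkg", || "wheel-a".to_string());
    // The second build closure never runs: the re-check finds the entry.
    let second = cache.get_or_build("pkg", || "wheel-b".to_string());
    println!("{first} {second}");
}
```

The design choice is the same as in the comment: tolerate duplicated early work in exchange for keeping the IO-heavy phase outside the serialized section.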

Member

@konstin konstin Mar 17, 2026

url_revision reads and writes cache_shard.entry(HTTP_REVISION). Doesn't that mean there can now be a race when two processes read or write this file? I don't think we're completely on the wrong path here; it's just that this code is tricky. We need to ensure that all operations two different processes can perform are either atomic replaces (e.g., building a directory and moving it into place) or protected by a lock, and that there can't be TOCTOU problems between processes (such as process A reading from the metadata that something fits, process B atomically replacing the directory, and then process A reading the actual content of the swapped directory, which no longer fits).

Author

There can be a race on HTTP_REVISION, but it's not a harmful one. It's not TOCTOU, because we read url_revision once, keep it in memory, and then use revision.id() for the build. We do not re-read HTTP_REVISION later, so we don't "check one revision, then use another".

I agree the concern is valid in general, so I'm just trying to avoid it here.

Let me do more testing on this specific concern and capture some state to confirm it is harmless, e.g., by running two separate uv syncs so that they share the same cache.
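The read-once argument can be made concrete with a small sketch (illustrative names; the real code reads a revision struct, not a string):

```rust
// Illustrative sketch of the read-once argument: the revision source may
// be rewritten concurrently, but we sample it exactly once and derive
// every later path from the held value, so there is no check/use gap.
fn prepare(read_revision: impl FnOnce() -> String) -> (String, String) {
    let revision = read_revision(); // single read of HTTP_REVISION
    let shard = format!("shard-{revision}"); // all later paths derive from it
    // ... download/extract into `shard`; the revision is never re-read ...
    (revision, shard)
}

fn main() {
    let (rev, shard) = prepare(|| "abc123".to_string());
    println!("{rev} -> {shard}");
}
```

A concurrent writer can still cause a second process to sample a different revision, which is the duplicated-early-work case discussed below, but within one process the checked and the used revision are the same value by construction.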

Member

I'm thinking about the case of two uv processes specifically, which, without a lock, could read and write url_revision in parallel, if I read it correctly.

Author

@denyszhak denyszhak Mar 24, 2026

Yes, I think your reading is correct. That part was intentional: the goal was to push the semaphore/lock boundary as far down as possible so we don't put the earlier revision/download/extract work behind the semaphore as well.

I agree this is a race, but the behavior I expect from it is duplicated early work (multiple revision shards), not a later "checked one revision, built another".

So I think the local tradeoff is:

  1. lock/semaphore later: lets more of the revision/download work run in parallel, but two processes can duplicate some work for the same source
  2. lock earlier: avoids that race on the revision, but puts more of the earlier IO behind the new semaphore

Option 2 is the safer choice. My hesitation was mainly the earlier concern about adding more serialization around the IO-heavy part of the work.


On the test side, I added an integration test that runs two separate uv processes against the same direct-URL dependency and shared cache, with a delayed local HTTP response to widen the race window. It verifies both succeed, then runs a third uv lock --offline against the same cache. That last step checks that the cache left behind by the concurrent race is still reusable afterward. In this test, the result looked like duplicate early work rather than an unusable or obviously corrupted cache.

denyszhak@807a93b - that's how it looks

Author

Ahh, my re-check "under the lock" is stupid

Member

There are two constraints we need to watch for. First, for all the IO operations in the cache and in venvs, we need to ensure that either all writes are atomic (create a file or directory, then rename it to the target) or we hold a lock while we do them. Second, we need to make sure that this also holds when the data is stretched across, e.g., a cache info file and a cache data file. What I'm trying to figure out is: can we statically assert from the code that this is the case here, or do we need the lock for the larger scope again?
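The first constraint, atomic writes via write-then-rename, is a standard pattern; a minimal std-only sketch, assuming a POSIX filesystem where rename atomically replaces the destination:

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write-then-rename: readers racing with this writer see either the old
// file or the new one, never a partially written file. A real
// implementation would use a process-unique temp name to avoid
// collisions between concurrent writers.
fn write_atomic(target: &Path, contents: &[u8]) -> std::io::Result<()> {
    let tmp = target.with_extension("tmp");
    let mut f = fs::File::create(&tmp)?;
    f.write_all(contents)?;
    f.sync_all()?; // flush data before the rename publishes it
    fs::rename(&tmp, target) // atomic replace on POSIX
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("revision-demo.txt");
    write_atomic(&path, b"rev-1")?;
    write_atomic(&path, b"rev-2")?;
    println!("{}", fs::read_to_string(&path)?);
    Ok(())
}
```

The second constraint is exactly what this pattern cannot cover on its own: two files renamed independently can still be observed in a mixed state, which is why data stretched across an info file and a data file needs the lock.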

Author

I think the first constraint is mostly satisfied here, but not the second. The second looks possible to satisfy too, but it would take a bit more restructuring. Let me get this version implemented first, and then we can decide whether we want to push it further.

Author

@denyszhak denyszhak Mar 29, 2026

For archives, I changed the pre-lock revision to be temporary only. Under the lock, we now pick or write the canonical LOCAL_REVISION and use that canonical revision from then on, so the archive state is coherent.

For URLs, I only changed the post-lock behaviour: under the lock we now re-read HTTP_REVISION and use that canonical revision for the later path. I did not move the HTTP_REVISION write itself under the lock, because that write currently happens inside the cached client, and doing it cleanly would require a deferred-write refactor, which seems too broad.

If this ends up being the direction we want, I can follow up with cleanup for the repeated "re-scope to canonical revision and re-check under lock" code, but I wanted to keep this change focused on the coordination fix itself.
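The "pick or write the canonical revision under the lock" step can be sketched like this (hypothetical function; the real code works with revision structs and a shard lock, not a bare path):

```rust
use std::fs;
use std::path::Path;

// Assumed to be called while holding the shard lock. Either adopts the
// canonical revision another process already published, or atomically
// publishes our pre-lock candidate; the caller uses the returned id for
// every later step, keeping the archive state coherent.
fn canonicalize_revision(canonical: &Path, candidate: &str) -> std::io::Result<String> {
    match fs::read_to_string(canonical) {
        // Someone beat us to it: discard our candidate and reuse theirs.
        Ok(existing) => Ok(existing),
        Err(_) => {
            // Publish our candidate via write-then-rename.
            let tmp = canonical.with_extension("tmp");
            fs::write(&tmp, candidate)?;
            fs::rename(&tmp, canonical)?;
            Ok(candidate.to_string())
        }
    }
}

fn main() -> std::io::Result<()> {
    let canonical = std::env::temp_dir().join("canonical-revision-demo");
    let _ = fs::remove_file(&canonical); // start clean for the demo
    let first = canonicalize_revision(&canonical, "rev-a")?;
    let second = canonicalize_revision(&canonical, "rev-b")?;
    println!("{first} {second}"); // both calls end up on the same revision
    Ok(())
}
```

Whichever process loses the race simply wastes its pre-lock candidate; every post-lock step in both processes sees the same canonical id.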

@denyszhak denyszhak marked this pull request as ready for review March 15, 2026 18:15
@denyszhak denyszhak requested a review from konstin March 15, 2026 18:16
@denyszhak denyszhak force-pushed the fix/installer-concurrency branch from 1308c32 to 1484433 on March 29, 2026 23:00

Labels

bug Something isn't working

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Investigate why uv holds so many file handles open

2 participants